Efficient Read-Restricted Monotone CNF/DNF Dualization by Learning with Membership Queries. Produced as part of the ESPRIT Working Group in Neural and Computational Learning II, NeuroCOLT2 27150
Authors
Abstract
We consider exact learning of monotone CNF formulas in which each variable appears at most some constant k times ("read-k" monotone CNF). Let f : {0,1}^n → {0,1} be expressible as a read-k monotone CNF formula for some natural number k. We give an incremental output polynomial time algorithm for exact learning both the read-k CNF and (not necessarily read-restricted) DNF descriptions of f. The algorithm's only method of obtaining information about f is through membership queries, i.e., by inquiring about the value f(x) for points x ∈ {0,1}^n. The algorithm yields an incremental output polynomial time solution to the bounded-degree hypergraph transversal problem and to the problem of (read-k) monotone CNF/DNF dualization, both of which remain open problems of importance in the unrestricted case.
Similar Resources
Efficient Read-Restricted Monotone CNF/DNF Dualization by Learning with Membership Queries
We consider exact learning of monotone CNF formulas in which each variable appears at most some constant k times ("read-k" monotone CNF). Let f : {0,1}^n → {0,1} be expressible as a read-k monotone CNF formula for some natural number k. We give an incremental output polynomial time algorithm for exact learning both the read-k CNF and (not necessarily read-restricted) DNF descriptions of f. The alg...
Read-Restricted Monotone CNF/DNF Dualization by Learning with Membership Queries
We consider exact learning of monotone CNF formulas in which each variable appears at most some constant k times ("read-k" monotone CNF). Let f : {0,1}^n → {0,1} be expressible as a read-k monotone CNF formula for some natural number k. We give an incremental output polynomial time algorithm for exact learning both the read-k CNF and (not necessarily read-restricted) DNF descriptions of f. The alg...
Discrete versus Analog Computation: Aspects of Studying the Same Problem in Different Computational Models. Produced as part of the ESPRIT Working Group in Neural and Computational Learning II, NeuroCOLT2 27150
In this tutorial we outline some of the features that arise when analyzing the same computational problems in different complexity-theoretic frameworks. We focus on two problems: the first related to mathematical optimization, and the second dealing with the intrinsic structure of complexity classes. Both examples serve well for working out to what extent different approaches to the same pro...
Multiplicative Updatings for Support-Vector Learning. Produced as part of the ESPRIT Working Group in Neural and Computational Learning II, NeuroCOLT2 27150
Support Vector machines find maximal margin hyperplanes in a high-dimensional feature space. Theoretical results exist which guarantee high generalization performance when the margin is large or when the number of support vectors is small. Multiplicative-updating algorithms are a new tool for perceptron learning whose theoretical properties are well studied. In this work we present a Multiplica...
Dynamically Adapting Kernels in Support Vector Machines. Produced as part of the ESPRIT Working Group in Neural and Computational Learning II, NeuroCOLT2 27150
The kernel parameter is one of the few tunable parameters in Support Vector machines, and it controls the complexity of the resulting hypothesis. The choice of its value amounts to model selection, which is usually performed by means of a validation set. We present an algorithm which can automatically perform model selection and learning with no additional computational cost and with no need of a va...
Save to My Resources
Save this resource in My Resources to make it easier to access later.
Journal title:
Volume  Issue
Pages  -
Publication date: 1998